17 research outputs found

    "Maybe it becomes a buddy, but do not call it a robot" - Seamless cooperation between companion robotics and smart homes

    This paper describes findings from ongoing qualitative usability evaluation studies of mobile companion robots in smart home environments, drawn from two research projects on socio-technical innovation to support independent living (CompanionAble and Mobiserv). Key findings are described, and it is argued that the robotic companion, the smart home environment, and external services need to be seamlessly integrated to create a truly supportive and trusted system. The idea of robot personas is introduced, and based on our empirical observations, it is argued that the robot persona, rather than the physical embodiment, is the most important determinant of users' acceptance, in terms of the robot's perceived trustworthiness and responsiveness, and therefore of their sense of enhanced usability and satisfaction with such personal assistive systems. © 2011 Springer-Verlag

    Personalized web learning by joining OER

    We argue that quality issues and didactical concerns of MOOCs may be overcome by relying on small Open Educational Resources (OERs), joining them into concise courses by gluing them together along predefined learning pathways with proper semantic annotations. This new approach to adaptive learning does not attempt to model the learner but rather concentrates on the learning process and established models thereof. Such an approach requires not only conceptual work and corresponding support tools but also a new metadata format and an engine that can interpret the semantic annotations as well as measure a learner's response to them. The EU FP7 project INTUITEL is introduced, which employs these technologies in a novel learning environment.
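As a rough illustration of the pathway idea (not the INTUITEL engine itself; the unit names, annotations, and helper below are all hypothetical), small OER units carrying semantic annotations can be glued into a concise course along a predefined learning pathway, with each unit's prerequisites checked against the concepts covered earlier on the pathway:

```python
# Hypothetical sketch: small OER units carry semantic annotations
# (concepts taught / concepts assumed); a predefined pathway joins
# them into a concise course, validated as it is walked.

from dataclasses import dataclass, field

@dataclass
class OER:
    title: str
    concepts: set                                # concepts this unit teaches
    requires: set = field(default_factory=set)   # concepts it assumes known

def build_course(units, pathway):
    """Order units along a predefined pathway, checking that every
    unit's prerequisites are covered by earlier units."""
    known, course = set(), []
    for title in pathway:
        unit = next(u for u in units if u.title == title)
        missing = unit.requires - known
        if missing:
            raise ValueError(f"{title} needs {missing} first")
        course.append(unit)
        known |= unit.concepts
    return course

units = [
    OER("Variables", {"variables"}),
    OER("Loops", {"loops"}, {"variables"}),
    OER("Functions", {"functions"}, {"variables"}),
]
course = build_course(units, ["Variables", "Loops", "Functions"])
print([u.title for u in course])  # ['Variables', 'Loops', 'Functions']
```

A pathway that placed "Loops" before "Variables" would raise an error, which is the sense in which the semantic annotations, rather than a learner model, drive the course assembly.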

    The ESCAPE project : Energy-efficient Scalable Algorithms for Weather Prediction at Exascale

    In the simulation of complex multi-scale flows arising in weather and climate modelling, one of the biggest challenges is to satisfy strict service requirements in terms of time to solution and to satisfy budgetary constraints in terms of energy to solution, without compromising the accuracy and stability of the application. These simulations require algorithms that minimise the energy footprint along with the time required to produce a solution, maintain the physically required level of accuracy, are numerically stable, and are resilient in case of hardware failure. The European Centre for Medium-Range Weather Forecasts (ECMWF) led the ESCAPE (Energy-efficient Scalable Algorithms for Weather Prediction at Exascale) project, funded by Horizon 2020 (H2020) under the FET-HPC (Future and Emerging Technologies in High Performance Computing) initiative. The goal of ESCAPE was to develop a sustainable strategy to evolve weather and climate prediction models to next-generation computing technologies. The project partners incorporate the expertise of leading European regional forecasting consortia, university research, experienced high-performance computing centres, and hardware vendors. This paper presents an overview of the ESCAPE strategy: (i) identify domain-specific key algorithmic motifs in weather prediction and climate models (which we term Weather & Climate Dwarfs), (ii) categorise them in terms of computational and communication patterns while (iii) adapting them to different hardware architectures with alternative programming models, (iv) analyse the challenges in optimising, and (v) find alternative algorithms for the same scheme. 
The participating weather prediction models are the following: IFS (Integrated Forecasting System); ALARO, a combination of AROME (Application de la Recherche à l'Opérationnel à Meso-Echelle) and ALADIN (Aire Limitée Adaptation Dynamique Développement International); and COSMO–EULAG, a combination of COSMO (Consortium for Small-scale Modeling) and EULAG (Eulerian and semi-Lagrangian fluid solver). For many of the weather and climate dwarfs, ESCAPE provides prototype implementations on different hardware architectures (mainly Intel Skylake CPUs, NVIDIA GPUs, Intel Xeon Phi, and the Optalysys optical processor) with different programming models. The spectral transform dwarf represents a detailed example of the co-design cycle of an ESCAPE dwarf. The dwarf concept has proven to be extremely useful for the rapid prototyping of alternative algorithms and their interaction with hardware, e.g. through the use of a domain-specific language (DSL). Manual adaptations have led to substantial accelerations of key algorithms in numerical weather prediction (NWP) but are not a general recipe for the performance portability of complex NWP models. Existing DSLs are found to require further evolution but are promising tools for achieving the latter. Measurements of energy and time to solution suggest that a future focus needs to be on exploiting the simultaneous use of all available resources in hybrid CPU–GPU arrangements.
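As a toy illustration of the dwarf idea (not ESCAPE code; the pure-Python radix-2 FFT below merely stands in for the far more elaborate spectral transform dwarf), one isolates a key algorithmic motif from the full model and benchmarks its time to solution in isolation:

```python
# Toy "dwarf" sketch: isolate one algorithmic motif (a radix-2 FFT
# standing in for the spectral transform dwarf) and measure its time
# to solution on a fixed workload, independently of the full model.

import cmath
import time

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

signal = [complex(i % 4) for i in range(1024)]
t0 = time.perf_counter()
spectrum = fft(signal)
elapsed = time.perf_counter() - t0
print(f"time to solution: {elapsed:.4f} s")
```

Swapping in an alternative algorithm or running the same motif on another architecture and comparing time (and, with appropriate counters, energy) to solution is the essence of the co-design cycle the abstract describes.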

    The CO2 Human Emissions (CHE) Project: First steps towards a European operational capacity to monitor anthropogenic CO2 emissions

    The Paris Agreement of the United Nations Framework Convention on Climate Change is a binding international treaty signed by 196 nations to limit their greenhouse gas emissions through ever-reducing Nationally Determined Contributions and a system of 5-yearly Global Stocktakes in an Enhanced Transparency Framework. To support this process, the European Commission initiated the design and development of a new Copernicus service element that will use Earth observations mainly to monitor anthropogenic carbon dioxide (CO2) emissions. The CO2 Human Emissions (CHE) project has been successfully coordinating the efforts of its 22 consortium partners to advance the development of a European CO2 monitoring and verification support (CO2MVS) capacity for anthropogenic CO2 emissions. Several project achievements are presented and discussed here as examples. The CHE project has developed an enhanced capability to produce global, regional and local CO2 simulations, with a focus on the representation of anthropogenic sources. The project has achieved advances towards a CO2 global inversion capability at high resolution to connect atmospheric concentrations to surface emissions. CHE has also demonstrated the use of Earth observations (satellite and ground-based) as well as proxy data for human activity to constrain uncertainties and to enhance the timeliness of CO2 monitoring. High-resolution global simulations (at 9 km) covering the whole of 2015 (labelled CHE nature runs) fed regional and local simulations over Europe (at 5 km and 1 km resolution) and supported the generation of synthetic satellite observations simulating the contribution of a future dedicated Copernicus CO2 Monitoring Mission (CO2M).
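The inversion capability mentioned above can be illustrated with a deliberately tiny sketch (not the CHE system; the observation operator and all numbers below are made up): given a linear operator H mapping surface emissions x to atmospheric concentrations y, a least-squares inversion recovers x from y:

```python
# Minimal inversion sketch (illustrative only): three noise-free
# concentration observations constrain two emission sources via the
# normal equations (H^T H) x = H^T y.

def solve2x2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

# H[i][j]: sensitivity of observation i to emission source j (made up)
H = [[0.8, 0.1],
     [0.2, 0.9],
     [0.5, 0.5]]
x_true = [10.0, 4.0]                                   # "true" emissions
y = [sum(H[i][j] * x_true[j] for j in range(2)) for i in range(3)]

HtH = [[sum(H[i][a] * H[i][b] for i in range(3)) for b in range(2)]
       for a in range(2)]
Hty = [sum(H[i][a] * y[i] for i in range(3)) for a in range(2)]
x_est = solve2x2(HtH, Hty)
print(x_est)  # ≈ [10.0, 4.0] with noise-free observations
```

In the real system H is a high-resolution atmospheric transport model and the observations carry uncertainty, so the inversion is regularised with prior information; the sketch only shows the direction of the mapping being inverted.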


    A new metric for measuring the visual quality of video watermarks

    In this paper we present an extension to the video watermarking scheme that we introduced in our previous work, as well as a new objective quality metric for video watermarks. As the amount of data that today's video watermarks can embed into a single video frame is still too small for many practical applications, our watermarking scheme provides a method for splitting the watermark message and spreading it over the complete video. This way we were able to overcome the capacity limitations, but we also encountered a new kind of distortion that affects the visual quality of the video watermark, the so-called "flickering" effect. However, we found that the existing video quality metrics were unable to capture the "flickering" effect. The extension of our watermarking scheme presented in this paper reduces the "flickering" effect, and thus improves the visual quality of the video watermark, by using scene detection techniques. Furthermore, we introduce a new quality metric for measuring the "flickering" effect, which is based on the well-known SSIM metric for still images and which we call the "Double SSIM Difference". Finally, we present the results of our evaluation of the proposed extension of the watermark embedding process, conducted using the "Double SSIM Difference" metric.
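One plausible reading of such a flicker metric (the paper's exact "Double SSIM Difference" formula may differ; a simplified global, unwindowed SSIM is used here for brevity) is the frame-to-frame change of the per-frame SSIM between the original and the watermarked video:

```python
# Hedged sketch of a flicker-style metric: compute SSIM per frame pair
# (original vs. watermarked), then average the frame-to-frame change of
# that SSIM. A temporally stable watermark scores 0; one that pulses
# between frames scores higher.

def ssim_global(a, b, L=255.0):
    """Simplified single-window SSIM over two equal-length pixel lists."""
    n = len(a)
    mu_a, mu_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mu_a) ** 2 for x in a) / n
    var_b = sum((x - mu_b) ** 2 for x in b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / n
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def flicker(orig_frames, wm_frames):
    """Mean |SSIM(t) - SSIM(t-1)|: larger values suggest flicker."""
    s = [ssim_global(o, w) for o, w in zip(orig_frames, wm_frames)]
    return sum(abs(s[t] - s[t - 1]) for t in range(1, len(s))) / (len(s) - 1)

orig = [[100.0] * 16 for _ in range(4)]
steady = [[104.0] * 16 for _ in range(4)]                     # constant offset
alt = [[104.0] * 16, [96.0] * 16, [104.0] * 16, [96.0] * 16]  # alternating
print(flicker(orig, steady) < flicker(orig, alt))  # True
```

The point is that a per-still-image metric like SSIM sees the steady and alternating watermarks as nearly equally faithful, while the temporal difference of SSIM separates them, which matches the abstract's claim that existing metrics miss the flickering effect.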

    Multi-modal fingerprinting for multimedia assets: optimisation of media security protection in online distribution environments

    Since the advent of the internet in everyday life in the 1990s, the barriers to producing, distributing and consuming multimedia data such as videos, music, ebooks, etc. have steadily been lowered for most computer users, so that almost everyone with internet access can join the online communities who produce, consume and, of course, also share media artefacts. Along with this trend, the violation of personal data privacy and copyright has increased, with illegal file sharing being rampant across many online communities, particularly for certain music genres and amongst younger age groups. This has had a devastating effect on the traditional media distribution market, in most cases leaving the distribution companies and the content owners with huge financial losses. To prove that a copyright violation has occurred, one can deploy fingerprinting mechanisms to uniquely identify the property; however, current approaches are based on only a single modality. In this paper we describe some of the design challenges and architectural approaches to multi-modal fingerprinting currently being examined for evaluation studies within a PhD research programme on the optimisation of multi-modal fingerprinting architectures. Accordingly, we outline the available modalities being integrated through this research programme, which aims to establish the optimal architecture for multi-modal media security protection over the internet as the online distribution environment for both legal and illegal distribution of media products.
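A hypothetical sketch of the fusion step in multi-modal fingerprinting (the feature strings, modality names and weights below are invented, and a cryptographic hash stands in for the robust perceptual fingerprints a real system would use): each modality of an asset yields its own compact fingerprint, and a match score fuses the per-modality similarities:

```python
# Hypothetical fusion sketch: per-modality fingerprints are compared by
# normalised Hamming similarity, then combined with modality weights.
# SHA-256 is a placeholder; real media fingerprints are perceptual and
# robust to re-encoding, which a cryptographic hash is not.

import hashlib

def fingerprint(features, n_bits=64):
    """Hash a modality's feature string into an n-bit integer fingerprint."""
    digest = hashlib.sha256(features.encode()).digest()
    return int.from_bytes(digest[: n_bits // 8], "big")

def similarity(fp_a, fp_b, n_bits=64):
    """1 - normalised Hamming distance between two fingerprints."""
    return 1.0 - bin(fp_a ^ fp_b).count("1") / n_bits

def multimodal_score(asset_a, asset_b, weights):
    """Weighted fusion of per-modality fingerprint similarities."""
    total = sum(weights.values())
    return sum(
        weights[m] * similarity(fingerprint(asset_a[m]), fingerprint(asset_b[m]))
        for m in weights
    ) / total

a = {"audio": "chroma:ABBA", "video": "keyframes:K1K2", "text": "title:Song"}
b = {"audio": "chroma:ABBA", "video": "keyframes:K1K2", "text": "title:Song"}
score = multimodal_score(a, b, {"audio": 0.5, "video": 0.3, "text": 0.2})
print(score)  # 1.0 for identical assets
```

The optimisation question the research programme poses then becomes, in this toy framing, which modalities to include and how to weight them so that legitimate copies score high and unrelated assets score low.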